
    Marketing in SMEs: a '4Ps' self-branding model

    Purpose – The purpose of this paper is to explore the extent to which traditional marketing theory and practice can be applied in small- and medium-sized enterprises (SMEs) and to consider how owner-managers perceive their own role in marketing within a small business setting. Design/methodology/approach – A qualitative exploratory approach was adopted, using semi-structured in-depth interviews amongst owner-managers of SMEs in the UK. Findings – SME marketing is effective in that it embraces some relevant concepts of traditional marketing, tailors activities to match its customers and adds its own unique attribute of self-branding as bestowed by the SME owner-manager. Research limitations/implications – The study was limited to the UK and to a small sample of SMEs, and as such the findings are not necessarily generalisable. Originality/value – A "4Ps" model for SME self-branding is proposed, which encompasses the attributes of personal branding, (co)production, perseverance and practice.

    Good Dog/Bad Dog: Dogs in Medieval Religious Polemics


    Learning Design: reflections on a snapshot of the current landscape

    The mounting wealth of open and readily available information and the swift evolution of social, mobile and creative technologies warrant a re-conceptualisation of the role of educators: from providers of knowledge to designers of learning. This need is being addressed by a growing trend of research in Learning Design. Responding to this trend, the Art and Science of Learning Design workshop brought together leading voices in the field and provided a forum for discussing its key issues. It focused on three thematic axes: practices and methods, tools and resources, and theoretical frameworks. This paper reviews some definitions of Learning Design and then summarises the main contributions to the workshop. Drawing upon these, we identify three key challenges for Learning Design that suggest directions for future research.

    Rational Trust Modeling

    Trust models are widely used in various computer science disciplines. The main purpose of a trust model is to continuously measure the trustworthiness of a set of entities based on their behaviors. In this article, the novel notion of "rational trust modeling" is introduced by bridging trust management and game theory. Note that trust models/reputation systems have long been used in game theory (e.g., in repeated games); however, game theory has not been utilized in the process of trust model construction, and this is where the novelty of our approach lies. In our proposed setting, the designer of a trust model assumes that the players who intend to utilize the model are rational/selfish, i.e., they decide to become trustworthy or untrustworthy based on the utility they can gain. In other words, the players are incentivized (or penalized) by the model itself to act properly. The problem of trust management can then be approached by game-theoretical analyses and solution concepts such as Nash equilibrium. Although rationality might be built into some existing trust models, we intend to formalize the notion of rational trust modeling from the designer's perspective. This approach results in two notable outcomes. First, the designer of a trust model can incentivize trustworthiness in the first place by incorporating proper parameters into the trust function, which can later be utilized among selfish players in strategic trust-based interactions (e.g., e-commerce scenarios). Furthermore, using a rational trust model, we can prevent many well-known attacks on trust models. These two prominent properties also help us predict the behavior of the players in subsequent steps by game-theoretical analyses.
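    The designer's-perspective idea in the abstract can be illustrated with a toy payoff comparison. This is a minimal sketch, not from the paper itself: all function names, parameters and payoff values below are invented, and the point is only that a reward/penalty asymmetry in the trust function can make honest behavior the best response for a selfish player.

    ```python
    # Toy sketch of a "rational trust model": the designer picks reward and
    # penalty parameters so that behaving trustworthily maximizes a selfish
    # player's utility. All numbers here are illustrative assumptions.

    def trust_update(trust, behaved_well, reward=0.1, penalty=0.3):
        """Update a trust score in [0, 1]; penalty > reward is the
        designer's incentive lever."""
        if behaved_well:
            return min(1.0, trust + reward)
        return max(0.0, trust - penalty)

    def expected_utility(trust, cheat_gain=0.2, future_value=1.0):
        """Utility of one round followed by trust-weighted future business."""
        honest = future_value * trust_update(trust, True)
        cheat = cheat_gain + future_value * trust_update(trust, False)
        return honest, cheat

    honest, cheat = expected_utility(0.5)
    # With these parameters honesty dominates: the long-run value of a
    # higher trust score outweighs the one-shot gain from cheating.
    ```

    Under this (assumed) parameterization, a rational player compares the two utilities and chooses the trustworthy action, which is the kind of equilibrium reasoning the abstract attributes to the model designer.
    
    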

    Online Reputation Systems in Web 2.0 Era

    Web 2.0 has transformed how reputation systems are designed and used on the Web. Based on a thorough review of existing online reputation systems and the challenges in their use, this paper presents a case study of Amazon’s reputation system to examine the impacts of Web 2.0. Through our case study, several distinguishing features of the new generation of reputation systems are noted, including multimedia feedback, a reviewer-centred design, folksonomy (the use of tags), community contribution, comprehensive reputation, and dynamic, interactive systems. These new developments promise a path towards a trustworthy and reliable online reputation system in the Web 2.0 era.

    RAFCON: a Graphical Tool for Task Programming and Mission Control

    There are many application fields for robotic systems including service robotics, search and rescue missions, industry and space robotics. As the scenarios in these areas grow more and more complex, there is a high demand for powerful tools to efficiently program heterogeneous robotic systems. Therefore, we created RAFCON, a graphical tool to develop robotic tasks and to be used for mission control by remotely monitoring the execution of the tasks. To define the tasks, we use state machines which support hierarchies and concurrency. Together with a library concept, even complex scenarios can be handled gracefully. RAFCON supports sophisticated debugging functionality and tightly integrates error handling and recovery mechanisms. A GUI with a powerful state machine editor makes intuitive, visual programming and fast prototyping possible. We demonstrated the capabilities of our tool in the SpaceBotCamp national robotic competition, in which our mobile robot solved all exploration and assembly challenges fully autonomously. It is therefore also a promising tool for various RoboCup leagues.

    Comment: 8 pages, 5 figures
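    The hierarchical state machines with a reusable library concept that the abstract describes can be sketched roughly as follows. This is not RAFCON's actual API; every class, method and state name below is hypothetical, meant only to show how a "library" state machine (a grasp routine) can be embedded as a child of a larger mission.

    ```python
    # Minimal hierarchical state machine sketch (hypothetical, not RAFCON's API).

    class State:
        """A leaf state that records its execution and runs an optional action."""
        def __init__(self, name, action=None):
            self.name = name
            self.action = action

        def execute(self, log):
            log.append(self.name)
            if self.action:
                self.action()

    class HierarchyState(State):
        """A state whose behavior is an ordered sequence of child states."""
        def __init__(self, name, children):
            super().__init__(name)
            self.children = children

        def execute(self, log):
            log.append(self.name)
            for child in self.children:
                child.execute(log)

    # A reusable "library" state machine, embedded into a larger task:
    grasp = HierarchyState("grasp", [State("open_gripper"), State("close_gripper")])
    mission = HierarchyState("mission", [State("drive_to_object"), grasp])

    trace = []
    mission.execute(trace)
    # trace == ["mission", "drive_to_object", "grasp", "open_gripper", "close_gripper"]
    ```

    Concurrency, debugging hooks and error recovery, which RAFCON provides on top of this basic hierarchy idea, are omitted here for brevity.
    
    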

    Simulation Approach to Assess the Precision of Estimates Derived from Linking Survey and Administrative Records

    Probabilistic record linkage implies that there is some level of uncertainty in the classification of pairs as links or non-links vis-à-vis their true match status. As record linkage is usually performed as a preliminary step to developing statistical estimates, the question is how this linkage uncertainty propagates to those estimates. In this paper, we develop a re-sampling approach to estimate the impact of linkage uncertainty on derived estimates. In each iteration of the re-sampling, pairs are classified as links or non-links by Monte Carlo assignment according to model-estimated true-match probabilities. By examining the range of estimates produced over a series of re-samples, we can estimate the distribution of derived statistics under the prevailing incidence of linkage uncertainty. For this analysis, we use the results of linking the 2014 National Hospital Care Survey to the National Death Index, performed at the National Center for Health Statistics, and assess the precision of hospital-level death rate estimates.
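    The re-sampling scheme the abstract describes can be sketched in a few lines. The data, match probabilities and record counts below are entirely made up for illustration; the structure is what matters: each iteration draws link/non-link status by Monte Carlo assignment, recomputes the derived statistic (here, a death rate), and the spread across iterations reflects the linkage uncertainty.

    ```python
    # Sketch of propagating linkage uncertainty into a derived estimate via
    # Monte Carlo re-sampling. All data below are illustrative assumptions.

    import random

    random.seed(0)

    # (estimated true-match probability, death indicator) for hypothetical
    # survey-to-death-index candidate pairs:
    pairs = [(0.95, 1), (0.80, 1), (0.60, 0), (0.30, 1), (0.10, 0)]
    n_records = 10  # records in the hypothetical survey

    def resample_death_rate():
        """Assign each pair link/non-link status by a Bernoulli draw on its
        match probability, then recompute the death rate."""
        deaths = sum(died for p, died in pairs if random.random() < p)
        return deaths / n_records

    rates = [resample_death_rate() for _ in range(1000)]
    low, high = min(rates), max(rates)
    mean_rate = sum(rates) / len(rates)
    # The spread (low, high) around mean_rate shows how linkage uncertainty
    # propagates into the death-rate estimate.
    ```

    In the paper's setting, the resampled statistic would be a hospital-level death rate rather than this toy overall rate, but the Monte Carlo assignment step is the same.
    
    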